Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is mapping discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g., usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework via the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets the new state-of-the-art on three VLN benchmarks.
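As a rough illustration of the hybrid topo-metric map idea described above, the sketch below pairs a topological graph of visited viewpoints (for long-term planning) with a small egocentric grid of deep features (for short-term reasoning). All class and method names are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import networkx as nx

class HybridTopoMetricMap:
    """Illustrative container: a topological graph for long-term planning
    plus a local metric grid of deep features for short-term reasoning."""

    def __init__(self, grid_size=21, feat_dim=768):
        self.graph = nx.Graph()                                  # nodes = visited viewpoints
        self.grid = np.zeros((grid_size, grid_size, feat_dim))   # egocentric feature map
        self.counts = np.zeros((grid_size, grid_size, 1))        # for averaging duplicates

    def add_viewpoint(self, node_id, position, neighbors=()):
        self.graph.add_node(node_id, position=position)
        for n in neighbors:
            self.graph.add_edge(node_id, n)

    def register_observation(self, cell_xy, feature):
        """Aggregate partial/duplicate observations by running-mean fusion."""
        x, y = cell_xy
        self.counts[x, y] += 1
        self.grid[x, y] += (feature - self.grid[x, y]) / self.counts[x, y]

    def plan(self, start, goal):
        """Long-term planning on the topological graph."""
        return nx.shortest_path(self.graph, start, goal)
```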
We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words, introduced to help represent what NL words can hardly describe, allowing a prompt to be more descriptive. Like NL prompts, X-Prompt is out-of-distribution (OOD) robust; to achieve this, we propose context-guided learning with prompt augmentation to learn the imaginary words for general usability, enabling them to be used in different prompt contexts for fine-grained specification. The promising results of X-Prompt demonstrate its potential to enable more advanced interaction between humans and LLMs and to bridge their communication gap.
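The core mechanism, learnable embeddings for new "imaginary word" tokens prepended to a natural-language prompt while the LLM stays frozen, might look roughly like the sketch below. It assumes a HuggingFace-style model that accepts `inputs_embeds`; the class name and initialization are illustrative, and the paper's context-guided learning and prompt augmentation are not shown.

```python
import torch
import torch.nn as nn

class ImaginaryWordPrompt(nn.Module):
    """Sketch: learnable embeddings for 'imaginary words' prepended to the
    embedded NL prompt, while the pretrained LLM stays frozen."""

    def __init__(self, llm, n_imaginary, embed_dim):
        super().__init__()
        self.llm = llm
        for p in self.llm.parameters():          # freeze the LLM
            p.requires_grad = False
        # the only trainable parameters: one vector per imaginary word
        self.imaginary = nn.Parameter(torch.randn(n_imaginary, embed_dim) * 0.02)

    def forward(self, prompt_embeds):
        # prompt_embeds: (batch, seq, embed_dim) from the frozen embedding table
        batch = prompt_embeds.shape[0]
        x_words = self.imaginary.unsqueeze(0).expand(batch, -1, -1)
        inputs = torch.cat([x_words, prompt_embeds], dim=1)   # X-Prompt = imaginary words + NL
        return self.llm(inputs_embeds=inputs)                 # assumes HF-style interface
```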
A key problem in medical visual question answering is how to effectively fuse language and medical image features given limited datasets. To better utilize the multi-scale information of medical images, previous methods directly embed the multi-stage visual feature maps as same-sized tokens and fuse them with the text representation. However, this confuses visual features from different stages. To this end, we propose a simple but powerful multi-stage feature fusion method, MF2-MVQA, which stage-wise fuses multi-level visual features with textual semantics. MF2-MVQA achieves state-of-the-art performance on the VQA-Med 2019 and VQA-RAD datasets. Visualization results also verify that our model outperforms previous work.
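A minimal sketch of stage-wise fusion, assuming one cross-attention block per backbone stage so that features from different stages are not mixed as same-sized tokens; module names and dimensions are illustrative, not the MF2-MVQA code.

```python
import torch
import torch.nn as nn

class StageWiseFusion(nn.Module):
    """Sketch: fuse each visual stage with the text representation separately
    (one cross-attention block per stage), then pool the fused output."""

    def __init__(self, stage_dims=(256, 512, 1024), text_dim=768, n_heads=8):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, text_dim) for d in stage_dims)
        self.cross = nn.ModuleList(
            nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
            for _ in stage_dims
        )

    def forward(self, stage_feats, text_feats):
        # stage_feats: list of (B, N_i, C_i) visual tokens from each backbone stage
        # text_feats:  (B, T, text_dim) question tokens
        fused = text_feats
        for feat, proj, attn in zip(stage_feats, self.proj, self.cross):
            vis = proj(feat)                       # align channel width to the text tokens
            out, _ = attn(fused, vis, vis)         # text queries attend to this stage only
            fused = fused + out                    # residual, then move to the next stage
        return fused.mean(dim=1)                   # pooled multimodal representation
```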
Aspect-based sentiment analysis (ABSA) aims at extracting opinionated aspect terms in review texts and determining their sentiment polarities, and it is widely studied in both academia and industry. As a fine-grained classification task, its annotation cost is extremely high. Domain adaptation is a popular solution to alleviate the data deficiency issue in new domains by transferring common knowledge across domains. Most cross-domain ABSA studies are based on structural correspondence learning (SCL) and use pivot features to construct auxiliary tasks for narrowing the gap between domains. However, their pivot-based auxiliary tasks can only transfer knowledge of aspect terms but not sentiment, limiting the performance of existing models. In this work, we propose a novel Syntax-guided Domain Adaptation Model, named SDAM, for more effective cross-domain ABSA. SDAM exploits syntactic structure similarities to build pseudo training instances, in which aspect terms of the target domain are explicitly related to sentiment polarities. Besides, we propose a syntax-based BERT masked language model for further capturing domain-invariant features. Finally, to alleviate the sentiment inconsistency issue in multi-gram aspect terms, we introduce a span-based joint aspect term and sentiment analysis module into the cross-domain End2End ABSA. Experiments on five benchmark datasets show that our model consistently outperforms the state-of-the-art baselines with respect to the Micro-F1 metric for the cross-domain End2End ABSA task.
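The abstract does not spell out the syntax-based masking scheme, so the following is only one plausible reading: prefer masking tokens that are close to an aspect term in the dependency tree, so that the masked-language-model objective focuses on aspect-sentiment structure. The function and its arguments are hypothetical, not SDAM's actual procedure.

```python
from collections import deque

def syntax_guided_mask(tokens, dep_edges, aspect_idx, hops=2, mask_token="[MASK]"):
    """Sketch: mask tokens within `hops` dependency edges of an aspect term.
    dep_edges: list of (head_index, dependent_index) pairs from any parser."""
    adj = {i: set() for i in range(len(tokens))}
    for h, d in dep_edges:
        adj[h].add(d)
        adj[d].add(h)
    reachable = set(aspect_idx)
    frontier = deque((i, 0) for i in aspect_idx)
    while frontier:                                 # breadth-first walk over the parse tree
        node, dist = frontier.popleft()
        if dist == hops:
            continue
        for nxt in adj[node]:
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append((nxt, dist + 1))
    return [mask_token if i in reachable and i not in aspect_idx else t
            for i, t in enumerate(tokens)]
```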
Labeling medical images requires professional expertise, so it is difficult to acquire a large number of high-quality annotated medical images in a short time. Therefore, making full use of the limited labeled samples in a small dataset to build a high-performance model is the key to medical image classification. In this paper, we propose a deeply supervised Layer Selective Attention Network (LSANET), which comprehensively uses label information in feature-level and prediction-level supervision. For feature-level supervision, in order to better fuse low-level and high-level features, we propose a novel visual attention module, Layer Selective Attention (LSA), to focus on feature selection across different layers. LSA introduces a weight allocation scheme that can dynamically adjust the weighting factor of each auxiliary branch throughout the training process, which further enhances deeply supervised learning and ensures its generalization. For prediction-level supervision, we adopt a knowledge synergy strategy to promote hierarchical information interaction among all supervised branches via pairwise knowledge matching. Using the public dataset MedMNIST, a large-scale benchmark for biomedical image classification covering multiple medical specialties, we evaluate LSANET on multiple mainstream CNN architectures and various visual attention modules. The experimental results show that our proposed method achieves substantial improvements over its corresponding counterparts, demonstrating that LSANET can provide a promising solution for label-efficient learning in the field of medical image classification.
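The abstract describes a weight allocation scheme that dynamically adjusts each auxiliary branch's weighting factor during training. One simple way to realize dynamically weighted deep supervision (not necessarily the paper's exact scheme) is a softmax over learnable per-branch logits, as sketched below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedLoss(nn.Module):
    """Sketch: total loss = main loss + dynamically weighted auxiliary losses,
    one per supervised branch (the weighting scheme here is illustrative)."""

    def __init__(self, n_branches):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_branches))   # learnable branch weights

    def forward(self, main_pred, aux_preds, target):
        weights = F.softmax(self.logits, dim=0)               # weights adapt during training
        loss = F.cross_entropy(main_pred, target)
        for w, pred in zip(weights, aux_preds):
            loss = loss + w * F.cross_entropy(pred, target)
        return loss
```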
Reinforcement learning (RL) has exceeded human performance in many synthetic settings such as video games and Go. However, real-world deployment of end-to-end RL models is less common, because RL models can be very sensitive to slight perturbations of the environment. The robust Markov decision process (MDP) framework, in which the transition probabilities belong to an uncertainty set around a nominal model, provides one way to develop robust models. While previous analyses have shown that RL algorithms are effective assuming access to a generative model, it remains unclear whether RL can be efficient in the more realistic online setting, which requires a careful balance between exploration and exploitation. In this work, we consider online robust MDPs by interacting with an unknown nominal system. We propose a robust optimistic policy optimization algorithm that is provably efficient. To address the additional uncertainty caused by the adversarial environment, our model features a new optimistic update rule derived via Fenchel conjugates. Our analysis establishes the first regret bound for online robust MDPs.
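For reference, the robust MDP objective referenced above is usually written as a worst case over an uncertainty set of transition kernels; the standard formulation is shown below (the paper's specific uncertainty set and its Fenchel-conjugate optimistic update are not reproduced here).

$$V^{\pi}_{\mathrm{rob}}(s) \;=\; \min_{P \in \mathcal{P}} \; \mathbb{E}_{P,\pi}\!\left[\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t) \,\middle|\, s_0 = s\right],$$

$$(\mathcal{T}V)(s) \;=\; \max_{a} \Big\{ r(s,a) + \gamma \min_{P(\cdot \mid s,a) \in \mathcal{P}_{s,a}} \mathbb{E}_{s' \sim P}\big[V(s')\big] \Big\}.$$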
Recently, 3D vision and language tasks have attracted growing research interest. Compared with other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and is more prone to language priors and co-reference ambiguity. Meanwhile, due to limited scale and annotation methods, several recently proposed 3D VQA datasets do not support the 3D VQA task well. In this work, we formally define and address the 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we label the objects appearing in the complex QA pairs with different semantic types, including answer-grounded objects (both appearing and not appearing in the question), and contextual objects for answer-grounded objects. We also propose a new 3D VQA framework to effectively predict the completely visually grounded and explainable answer. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and our newly proposed framework also achieves state-of-the-art performance on the new benchmark dataset. Both the newly collected dataset and our code will be publicly available at http://github.com/zlccccc/3dgqa.
Recently, score-based diffusion models have shown satisfactory performance in MRI reconstruction. Most of these methods require a large amount of fully sampled MRI data as a training set, which is sometimes difficult to acquire in practice. This paper proposes a fully-sampled-data-free score-based diffusion model for MRI reconstruction, which learns the fully sampled MR image prior in a self-supervised manner from undersampled data. Specifically, we first infer the fully sampled MR image distribution from the undersampled data by Bayesian deep learning, then perturb the data distribution and approximate its probability density gradient by training a score function. Leveraging the learned score function as a prior, we can reconstruct the MR image by performing conditional Langevin Markov chain Monte Carlo (MCMC) sampling. Experiments on public datasets show that the proposed method outperforms existing self-supervised MRI reconstruction methods and achieves performance comparable to conventional (fully-sampled-data-trained) score-based diffusion methods.
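Conditional Langevin MCMC sampling with a learned score, as mentioned above, typically alternates a Langevin update with a data-consistency step on the acquired k-space lines. The sketch below is a simplified real-valued version with an illustrative step-size schedule; the paper's score network, noise levels, and exact data-consistency form may differ.

```python
import numpy as np

def langevin_reconstruct(score_fn, y, mask, sigmas, steps_per_level=10, eps=1e-5, rng=None):
    """Sketch of conditional annealed Langevin MCMC for undersampled MRI.
    score_fn(x, sigma): estimated score of the image prior at noise level sigma.
    y: measured k-space; mask: sampling mask (1 = acquired line)."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(y.shape)                        # image-domain init (real-valued toy)
    for sigma in sigmas:                                    # anneal from large to small noise
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            noise = rng.standard_normal(x.shape)
            x = x + step * score_fn(x, sigma) + np.sqrt(2 * step) * noise
            # data consistency: replace acquired k-space lines with the measurements
            k = np.fft.fft2(x)
            k = mask * y + (1 - mask) * k
            x = np.real(np.fft.ifft2(k))
    return x
```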
Point cloud registration aims to estimate the geometric transformation between two point cloud scans, in which the estimation of point correspondences is key to its success. In addition to previous methods that seek correspondences via hand-crafted or learned geometric features, recent point cloud registration methods have also attempted to apply RGB-D data to achieve more accurate correspondences. However, it is not trivial to effectively fuse the geometric and visual information from these two distinct modalities, especially for the registration problem. In this work, we propose a new Geometry-Aware Visual feature Extractor (GAVE) that employs multi-scale local linear transformations to progressively fuse the two modalities, where the geometric features of depth data act as geometry-dependent convolution kernels to transform the visual features of RGB data. The resulting visual-geometric features lie in a canonical feature space, in which visual discrepancies caused by geometric changes are alleviated, so that more reliable correspondences can be achieved. The proposed GAVE module can be readily plugged into recent RGB-D point cloud registration frameworks. Extensive experiments on 3DMatch and ScanNet demonstrate that our method outperforms state-of-the-art point cloud registration methods even without correspondence or pose supervision. The code is available at: https://github.com/514DNA/llt.
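One way to picture a geometry-dependent kernel transforming visual features is a small module in which per-point geometric features generate a per-point linear transform that is applied to the co-located RGB features. The sketch below is illustrative only and omits the multi-scale and local-neighborhood aspects of the paper's design.

```python
import torch
import torch.nn as nn

class GeometryGuidedTransform(nn.Module):
    """Sketch: geometry features generate a per-point linear transform applied to
    the co-located visual features, so visual descriptors become less sensitive
    to viewpoint/geometry changes (illustrative, not the paper's code)."""

    def __init__(self, geo_dim, vis_dim, hidden=128):
        super().__init__()
        self.kernel_gen = nn.Sequential(
            nn.Linear(geo_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, vis_dim * vis_dim),
        )

    def forward(self, geo_feats, vis_feats):
        # geo_feats: (B, N, geo_dim) per-point geometric features
        # vis_feats: (B, N, vis_dim) per-point visual (RGB) features
        B, N, C = vis_feats.shape
        W = self.kernel_gen(geo_feats).view(B, N, C, C)       # geometry-dependent kernels
        return torch.einsum('bnij,bnj->bni', W, vis_feats)    # per-point linear transform
```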
Magnetic resonance imaging is an important tool for clinical diagnosis. However, it suffers from long acquisition times. The use of deep learning, especially deep generative models, offers aggressive acceleration and better reconstruction in magnetic resonance imaging. Nevertheless, learning the data distribution as prior knowledge and reconstructing an image from limited data remain challenging. In this work, we propose a novel Hankel-k-space generative model (HKGM) that can generate samples from a training set of as little as one k-space data record. In the prior learning stage, we first construct a large Hankel matrix from the k-space data, then extract multiple structured k-space patches from this large Hankel matrix to capture the internal distribution among different patches. Extracting patches from the Hankel matrix allows the generative model to learn from a redundant and low-rank data space. In the iterative reconstruction stage, the desired solution is observed to obey the learned prior knowledge. The intermediate reconstruction is updated by taking it as the input of the generative model, and the updated result is then alternately operated on by imposing a low-rank penalty on its Hankel matrix and a data-consistency constraint on the measured data. Experimental results confirm that the internal statistics of patches within a single k-space data record contain sufficient information to learn a powerful generative model and provide state-of-the-art reconstruction.
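The Hankel-structured matrix mentioned above can be pictured as stacking flattened sliding-window k-space patches as rows, which is what makes the resulting matrix redundant and low-rank. The sketch below shows this construction and a simple low-rank surrogate penalty; patch size, stride, and the rank threshold are illustrative choices, not the paper's settings.

```python
import numpy as np

def kspace_hankel(kspace, patch=(8, 8), stride=1):
    """Sketch: slide a window over 2D k-space and stack each flattened patch as
    a row, giving a Hankel-structured matrix whose rows are highly redundant."""
    H, W = kspace.shape
    ph, pw = patch
    rows = []
    for i in range(0, H - ph + 1, stride):
        for j in range(0, W - pw + 1, stride):
            rows.append(kspace[i:i + ph, j:j + pw].ravel())
    return np.stack(rows)                     # shape: (num_patches, ph * pw)

def low_rank_penalty(hankel, rank=32):
    """Simple low-rank surrogate: sum of singular values beyond a target rank."""
    s = np.linalg.svd(hankel, compute_uv=False)
    return float(np.sum(s[rank:]))
```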